Markov Chain Monte Carlo Confidence Intervals

Author

  • YVES F. ATCHADÉ
Abstract

For a reversible and ergodic Markov chain {Xn, n ≥ 0} with invariant distribution π, we show that a valid confidence interval for π(h) can be constructed whenever the asymptotic variance σ²_P(h) is finite and positive. We do not impose any additional condition on the convergence rate of the Markov chain. The confidence interval is derived using the so-called fixed-b lag-window estimator of σ²_P(h). We also derive a result that suggests that the proposed confidence interval procedure converges faster than classical confidence interval procedures based on the Gaussian distribution and standard central limit theorems for Markov chains.
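The abstract's construction can be illustrated with a minimal sketch: simulate a reversible chain, estimate the asymptotic variance σ²_P(h) with a Bartlett lag-window whose bandwidth is a fixed fraction b of the run length (the "fixed-b" regime), and form an interval around the ergodic average. Note this is an illustrative assumption-laden sketch, not the paper's procedure: the chain below is a simple AR(1) process (reversible with a Gaussian invariant law), and the 1.96 critical value is the standard Gaussian one, whereas genuine fixed-b inference uses nonstandard critical values.

```python
import numpy as np

def bartlett_lag_window_var(x, b=0.05):
    """Lag-window estimate of the asymptotic variance of the sample mean,
    using a Bartlett kernel with bandwidth m = floor(b * n)."""
    n = len(x)
    xc = x - x.mean()
    m = max(1, int(b * n))
    var = np.sum(xc ** 2) / n  # lag-0 autocovariance
    for k in range(1, m):
        gamma_k = np.sum(xc[:-k] * xc[k:]) / n  # lag-k autocovariance
        var += 2.0 * (1.0 - k / m) * gamma_k    # Bartlett (triangular) weight
    return var

rng = np.random.default_rng(0)
# AR(1) chain x_t = phi * x_{t-1} + e_t: reversible w.r.t. N(0, 1/(1-phi^2))
phi, n = 0.7, 50_000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

sigma2 = bartlett_lag_window_var(x, b=0.05)
# Gaussian critical value used here only as a simplification; fixed-b
# theory replaces 1.96 with a larger, bandwidth-dependent quantile.
half = 1.96 * np.sqrt(sigma2 / n)
print(f"estimate {x.mean():.3f} +/- {half:.3f}")
```

For this chain the true asymptotic variance of the mean is 1/(1 - phi)² ≈ 11.1, so the half-width should come out near 0.03 for a run of this length.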


Similar articles

Adaptive Markov Chain Monte Carlo Confidence Intervals

In Adaptive Markov Chain Monte Carlo (AMCMC) simulation, classical estimators of asymptotic variances are inconsistent in general. In this work we establish that despite this inconsistency, confidence interval procedures based on these estimators remain consistent. We study two classes of confidence intervals, one based on the standard Gaussian limit theory, and the class of so-called fixed-b c...


Assigning Confidence Intervals to Neural Network Predictions

Abstract This report reviews three possible approaches to the assignment of confidence intervals to feed-forward neural networks, namely, bootstrap estimation, maximum likelihood estimation, and Bayesian statistics. The report concludes with a proposal for mixture modelling via Markov Chain Monte Carlo sampling to enable non-Gaussian variances to be modelled without introducing the bias caused ...


Classical and Bayesian estimation of Weibull distribution in presence of outliers

Abstract: This study deals with the classical and Bayesian estimation of the parameters of Weibull distribution in presence of outlier. In classical setup, the maximum likelihood estimates of the model parameters along with their standard errors (SEs) and confidence intervals are computed. Bayes estimates along with their posterior SEs and highest posterior density credible intervals of the par...


How to Combine Fast Heuristic Markov Chain Monte Carlo with Slow Exact Sampling

Given a probability law π on a set S and a function g : S → R, suppose one wants to estimate the mean ḡ = ∫ g dπ. The Markov Chain Monte Carlo method consists of inventing and simulating a Markov chain with stationary distribution π. Typically one has no a priori bounds on the chain’s mixing time, so even if simulations suggest rapid mixing one cannot infer rigorous confidence intervals for ḡ. ...
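The basic method that abstract refers to, simulating a chain with stationary law π to estimate ḡ = ∫ g dπ, can be sketched with a random-walk Metropolis sampler. Everything here (the target, the test function g, the step size) is an illustrative assumption, not taken from the cited paper.

```python
import numpy as np

def metropolis_mean(log_pi, g, x0, n, step, rng):
    """Estimate gbar = integral of g w.r.t. pi by averaging g along a
    random-walk Metropolis chain with unnormalised log-density log_pi."""
    x, total = x0, 0.0
    for _ in range(n):
        prop = x + step * rng.standard_normal()
        # Symmetric proposal: accept with prob min(1, pi(prop)/pi(x))
        if np.log(rng.random()) < log_pi(prop) - log_pi(x):
            x = prop
        total += g(x)
    return total / n

rng = np.random.default_rng(1)
# Target: standard normal, g(x) = x^2, so the true mean is gbar = 1.
est = metropolis_mean(lambda x: -0.5 * x * x, lambda x: x * x,
                      0.0, 100_000, 2.4, rng)
print(est)
```

As the abstract notes, such a run gives no rigorous confidence statement by itself: without a bound on the mixing time, the apparent accuracy of the average cannot be certified.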



Journal:

Volume   Issue 

Pages  -

Publication date: 2015